

He Couldn't Land a Job Interview. Was AI to Blame?

WIRED

Armed with some Python and a white-hot sense of injustice, one medical student spent six months trying to figure out whether an algorithm trashed his job application. It was mid-October, peak leaf-peeping season in Hanover, New Hampshire, and Chad Markey was on a rare break between clinical rotations during his last year of medical school. He should have been inhaling Green Mountain air and gossiping with his Dartmouth classmates about life after graduation. In a few months, they'd all be going their separate ways to start residency training at hospitals around the country. Instead, Markey was alone in his apartment, deep down a rabbit hole, preparing to go to war. He'd wake each morning, eat breakfast, open his laptop at the kitchen table or settle into the tan armchair with the good back support, and start coding. Some days, he wouldn't notice the sun had gone down until one of his roommates came home and asked why the lights weren't on. For days, Markey had been scrolling through a Discord group about medical residency, a font of crowdsourced knowledge where students report back to their peers on every stage of the application and selection process. He'd watched as other students, lots of them, posted about the interview invitations they'd received.


This Scammer Used an AI-Generated MAGA Girl to Grift 'Super Dumb' Men

WIRED

A med student says he's made thousands of dollars selling photos and videos of a young conservative woman he created using generative tools. Like many medical school students, Sam was broke. The 22-year-old aspiring orthopedic surgeon from northern India got some money from his parents, but he says he spent most of it subsidizing his licensing exams, and he's still saving up to hopefully emigrate to the US after graduation. So he started searching for ways to make additional money online. Sam, who requested a pseudonym to avoid jeopardizing his medical career and immigration status, tried a few things, with varying degrees of legitimacy and success.


A Quantum Leap for the Turing Award

WIRED

Charles Bennett and Gilles Brassard pioneered quantum information theory. Now they've been awarded the highest honor in computer science. Today it's widely acknowledged that the future of computing will involve the quantum realm. Companies like Google, Microsoft, IBM, and a few well-funded startups are frantically building quantum computers and routinely claiming advances that seem to bring this exotic, world-changing technology within reach. In 1979 all of this was unthinkable.


Counterfactual Fairness

Neural Information Processing Systems

Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
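The intuition above can be made concrete with a toy structural causal model. This is a minimal sketch, not the paper's actual framework: all variable names and the linear model are hypothetical, chosen only to show how one might test whether a prediction changes when the protected attribute is flipped while the latent background variable is held fixed.

```python
import random

random.seed(0)  # reproducibility for this toy check

# Hypothetical structural causal model:
#   A: protected attribute (0 or 1), U: latent background variable,
#   X = 2*A + U: observed feature, a descendant of A.

def gen_latent():
    return random.gauss(0, 1)          # U is unaffected by A

def feature(a, u):
    return 2.0 * a + u                 # X inherits A's influence

def unfair_predictor(x):
    return 1 if x > 1.0 else 0         # uses X, so A leaks into the decision

def fair_predictor(u):
    return 1 if u > 0.0 else 0         # uses only U, a non-descendant of A

def counterfactually_fair(predictor, uses_latent, trials=10_000):
    """Does flipping A (0 -> 1) ever change the prediction,
    holding the latent U fixed across both worlds?"""
    flips = 0
    for _ in range(trials):
        u = gen_latent()
        if uses_latent:
            y0, y1 = predictor(u), predictor(u)   # A never enters
        else:
            y0 = predictor(feature(0, u))         # actual world, A = 0
            y1 = predictor(feature(1, u))         # counterfactual, A = 1
        flips += (y0 != y1)
    return flips == 0

print(counterfactually_fair(unfair_predictor, uses_latent=False))  # False
print(counterfactually_fair(fair_predictor, uses_latent=True))     # True
```

The key design point is that the counterfactual is evaluated with the same latent U as the factual world, so any difference in the prediction is attributable to the protected attribute's causal influence.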


Dialog-based Language Learning

Neural Information Processing Systems

A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on learning from fixed training sets of labeled data, with supervision either at the word level (tagging, parsing tasks) or sentence level (question answering, machine translation). This kind of supervision is not realistic of how humans learn, where language is both learned by, and used for, communication. In this work, we study dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation. We study this setup in two domains: the bAbI dataset of (Weston et al., 2015) and large-scale question answering from (Dodge et al., 2015). We evaluate a set of baseline learning strategies on these tasks, and show that a novel model incorporating predictive lookahead is a promising approach for learning from a teacher's response. In particular, a surprising result is that it can learn to answer questions correctly without any reward-based supervision at all.
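The core setup, supervision arriving only as the dialog partner's textual response rather than as a label or numeric reward, can be illustrated with a deliberately simple toy. This is a sketch of the learning signal, not the paper's lookahead model: the fact table, the learner, and the teacher's response templates are all hypothetical.

```python
# Toy dialog-based learning: the learner improves solely from the
# teacher's natural-language responses; no reward signal is given.
facts = {"capital of france": "paris", "capital of japan": "tokyo"}

class Learner:
    def __init__(self):
        self.memory = {}

    def answer(self, question):
        return self.memory.get(question, "i don't know")

    def observe(self, question, teacher_response):
        # Supervision is implicit: mine the correction text for the answer.
        prefix = "no, the answer is "
        if teacher_response.startswith(prefix):
            self.memory[question] = teacher_response[len(prefix):]

def teacher_reply(question, answer):
    truth = facts[question]
    return "yes, that's right" if answer == truth else f"no, the answer is {truth}"

learner = Learner()
for _ in range(2):  # two passes over the dialog
    for q in facts:
        learner.observe(q, teacher_reply(q, learner.answer(q)))

print(learner.answer("capital of france"))  # -> paris
```

After the first pass the learner answers every question correctly, having extracted the answers entirely from the teacher's conversational corrections, which mirrors the paper's observation that reward-free supervision can suffice.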



Agent Planning with World Knowledge Model

Neural Information Processing Systems

Imitating the human mental world knowledge model, which provides global prior knowledge before a task and maintains local dynamic knowledge during the task, in this paper we introduce a parametric World Knowledge Model (WKM) to facilitate agent planning.



Scaling Sign Language Translation

Neural Information Processing Systems

Sign language translation (SLT) addresses the problem of translating information from a sign language in video to a spoken language in text. Existing studies, while showing progress, are often limited to narrow domains and/or few sign languages and struggle with open-domain tasks. In this paper, we push forward the frontier of SLT by scaling pretraining data, model size, and number of translation directions. We perform large-scale SLT pretraining on different data including 1) noisy multilingual YouTube SLT data, 2) parallel text corpora, and 3) SLT data augmented by translating video captions to other languages with off-the-shelf machine translation models. We unify different pretraining tasks with task-specific prompts under the encoder-decoder architecture, and initialize the SLT model with pretrained (m/By)T5 models across model sizes. SLT pretraining results on How2Sign and FLEURS-ASL#0 (ASL to 42 spoken languages) demonstrate the significance of data/model scaling and cross-lingual cross-modal transfer, as well as the feasibility of zero-shot SLT. We finetune the pretrained SLT models on 5 downstream open-domain SLT benchmarks covering 5 sign languages. Experiments show substantial quality improvements over the vanilla baselines, surpassing the previous state-of-the-art (SOTA) by wide margins.


Auslan-Daily: Australian Sign Language Translation for Daily Communication and News

Neural Information Processing Systems

Considering that different geographic regions generally have their own native sign languages, it is valuable to establish corresponding SLT datasets to support related communication and research. Auslan, the sign language specific to Australia, still lacks a dedicated large-scale dataset for SLT.